Stabilization of the advection-diffusion equations with the residual minimization method

Let us consider the advection-diffusion problem (modeling, for example, the propagation of a pollutant over a given area by diffusion and by the advection "wind"). Its weak form reads:
\( b(u,v)=l(v) \\ b(u,v) = \int_{\Omega} \beta_x(x,y) \frac{\partial u(x,y)}{\partial x} v(x,y) dxdy + \int_{\Omega } \beta_y(x,y) \frac{\partial u(x,y)}{\partial y } v(x,y) dxdy + \\ \int_{\Omega} \epsilon \frac{\partial u(x,y)}{\partial x } \frac{\partial v(x,y)}{\partial x } dxdy +\int_{\Omega} \epsilon \frac{\partial u(x,y)}{\partial y } \frac{\partial v(x,y)}{\partial y } dxdy \\ l(v) = \int_{\Omega} f(x,y) v(x,y) dxdy \)
It corresponds to the strong form \( \beta_x \frac{\partial u}{\partial x} + \beta_y \frac{\partial u}{\partial y} - \epsilon \Delta u = f \) with suitable boundary conditions, where \( (\beta_x,\beta_y) \) is the advection field and \( \epsilon \) is the diffusion coefficient.
We denote by \( U \) a set of functions that includes the solution to the advection-diffusion problem \( u \in U \).
Then by \( V \) we denote the set of functions that we use for testing in our variational (also called weak) formulation.
In this chapter we describe, in abstract mathematical language, how the residual minimization method is derived [1].
Our original problem is: We seek \( u \in U \) such that \( b(u,v)=l(v) \) for all test functions \( v\in V \).
We can denote the left-hand side by \( Bu \) and the right-hand side by \( l \) by introducing suitable mathematical operators (mathematicians will say that \( B \colon U \rightarrow V' \) and \( \langle Bu, v \rangle = b(u,v) \)).
Then our problem comes down to finding the scalar field \( u \) such that
\( Bu-l=0 \). The expression \( r=Bu-l \) is called the residual.
If we only have an approximate solution, denoted for example \( u_h \),
then the residual \( r_h=Bu_h-l \) expresses the error: the larger the residual, the larger the error of the approximate solution. How do we measure the solution error, i.e., the size of the residual?
To measure how big the residual is, we have to introduce a norm (i.e., a measure of size) \( \| r_h \| = \| Bu_h-l \| \). Now we can say that we are looking for a solution \( u_h \) of our problem such that the residual (and hence the error) is as small as possible. Mathematically, this condition is written as the minimization of the norm
\( u_h = \displaystyle{argmin}_{w_h \in U_h } {1 \over 2} \| Bw_h - l \|_{V' }^2 \)
The factor \( 1\over 2 \) written in front of the norm and the squaring of the norm are conventions that simplify the differentiation performed below.
The practical problem is that we do not know how to compute the norm of the difference \( \|Bw_h-l\| \). This norm measures distances in an abstract mathematical space \( V' \) (mathematicians will say that this space is dual to the space of test functions).
To solve this problem, we perform a mathematical trick: we transfer the norm from the space \( V' \) to the testing space \( V \). We introduce a projection operator from the space \( V \) into the space \( V' \). This operator is called the Riesz operator \( R_V \colon V \ni v \rightarrow (v,.) \in V' \). The inverse operator maps \( V' \) back into the space \( V \), so \( R_V^{-1} \colon V' \ni (v,.) \rightarrow v \in V \). Applying the inverse Riesz operator thus transfers our problem to the space of test functions \( V \). So we have
\( u_h = \displaystyle{argmin}_{w_h \in U_h} {1 \over 2} \| {\color{red}{R_V^{-1}}} (Bw_h - l) \|_{\color{red}{V} }^2 \). So we are looking for the point where this functional reaches its minimum. This point is reached where the derivatives of the functional are equal to zero. Since we are dealing with a functional, mathematicians will say that we are looking for the \( u_h \) for which all Gâteaux derivatives, in all directions, are equal to zero
\( \langle R_V^{-1} (Bu_h - l), R_V^{-1}(B\, w_h) \rangle_V = 0 \quad \forall \, w_h \in U_h \) where \( w_h \) stands for all possible "directions" from a finite dimensional function space \( U_h \) (in which, by the way, we are looking for a solution to our advection-diffusion problem).
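This condition follows by differentiating the functional along the direction \( w_h \): since \( B \) is linear and the norm is induced by the inner product \( (\cdot,\cdot)_V \), we have \( \frac{d}{dt}\Big|_{t=0} {1 \over 2} \| R_V^{-1} (B(u_h+t\,w_h) - l) \|_V^2 = \langle R_V^{-1} (Bu_h - l), R_V^{-1}(B\, w_h) \rangle_V \), and requiring this to vanish for every direction gives the condition above.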
Technically speaking, at this point we need to construct a finite dimensional space of functions \( U_h \) in which we will look for our solution. We do this, as everywhere in our textbook, using B-spline basis functions, spanned over knot vectors along the x axis and along the y axis.
Along the \( x \) axis we introduce the knot vector [0 0 0 1 2 3 4 4 4], and likewise along the \( y \) axis we introduce the knot vector [0 0 0 1 2 3 4 4 4].
We obtain two bases of one-dimensional B-spline functions \( \{ B_{i,2}(x) \}_{i=1,...,6 } \) and \( \{B_{j,2}(y)\}_{j=1,...,6 } \). Then we create their tensor products \( B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y),i,j=1,...,6 \).
Note that our domain \( \Omega \) is a square (the knot vectors span the parametric square \( [0,4]^2 \), which can be rescaled to the unit square \( [0,1]^2 \)), and that we have 6 * 6 = 36 basis functions, which follows from the definition of the knot vector [0 0 0 1 2 3 4 4 4]: 9 knots minus degree 2 minus 1 gives 6 functions in each direction.
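To make the construction concrete, here is a minimal Python sketch (assuming numpy and scipy are available) that builds this quadratic B-spline basis and verifies that it consists of 6 functions forming a partition of unity:

```python
import numpy as np
from scipy.interpolate import BSpline

p = 2                                          # B-spline degree (quadratic)
knots = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)
n = len(knots) - p - 1                         # number of basis functions: 9 - 2 - 1 = 6
print(n)                                       # -> 6

x = np.linspace(0, 4, 401)
basis = np.array([BSpline(knots, np.eye(n)[i], p)(x)   # i-th basis function B_{i,2}
                  for i in range(n)])
print(np.allclose(basis.sum(axis=0), 1.0))     # partition of unity -> True
```

The two-dimensional basis is then the tensor product \( B_{i,2}(x) B_{j,2}(y) \), giving the 6 * 6 = 36 functions.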
Our approximation space is then spanned by the tensor products of our one-dimensional B-spline functions
\( U_h = gen \{ B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y),i,j=1,...,6 \} \).
So it is a 6 * 6 = 36-dimensional space, and we have 36 directions (B-spline basis functions) in which we will compute the directional derivatives and equate them to zero.
We denote \( r=R_V^{-1}(Bu_h-l) \)
and then our residual minimization problem can be written as \( ( r , R_V^{-1} (B\, w_h ) )_V = 0 \quad \forall \, w_h \in U_h \)
which is equivalent to \( \langle Bw_h, r \rangle = 0 \quad \forall w_h \in U_h. \)
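This equivalence follows from the defining property of the Riesz operator: for any functional \( f \in V' \) and any \( v \in V \) we have \( ( R_V^{-1} f, v )_V = \langle f, v \rangle \); applying this with \( f = B w_h \) and \( v = r \) turns \( ( r, R_V^{-1}(B\, w_h) )_V \) into \( \langle B w_h, r \rangle \).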
We now add a second equation based on the definition of the residual \( r=R_V^{-1}(Bu_h-l) \): we take the inner product of the residual with test functions from the space of test functions and obtain the equation:
\( (r,v)_V=\langle B u_h-l,v \rangle \quad \forall v\in V. \)
So we have a system of equations: find the residual \( r\in V \) in the infinite dimensional space of test functions and the solution \( u_h \in U_h \) in the finite dimensional space of functions
\( (r,v)_V - \langle Bu_h-l ,v \rangle = 0 \quad \forall v\in V \\ \langle Bw_h,r\rangle = 0 \quad \forall w_h \in U_h \)
If we were able to solve this system, we would get the best possible approximation of our problem in the space of approximating functions \( U_h \). Unfortunately, this is not possible, because the infinite dimensional space of test functions generates an infinite number of equations to solve. So we have to choose a second, different basis to approximate the test space. Here comes the first significant difference between the residual minimization method and the traditional finite element method: in the residual minimization method we have two different spaces, one for approximating the solution, the other for testing. Of course, we also span the testing space using B-spline functions. For example, we can use the same patch of elements as for the approximation space (but this is not required!) or we can increase the degree of the B-spline functions.
We specify what our patch of elements will look like and our B-spline basis functions spanned over the elements, by giving a knot vector along the \( x \) axis and a knot vector along the \( y \) axis. Here we refer to the third chapter, which describes the way of defining basis functions using knot vectors and B-spline polynomials.
Along the \( x \) axis we introduce the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4], and likewise along the \( y \) axis we introduce the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4]. Generally, the testing space must be larger than the approximation space. We can achieve this by increasing the number of elements, increasing the degree of the B-spline functions, or reducing the continuity of the B-spline functions, or by mixing these three approaches.
We obtain two bases of one-dimensional B-spline functions
\( \{ \tilde{B}_{k,3}(x) \}_{k=1,...,10 } \) and \( \{\tilde{B}_{l,3}(y)\}_{l=1,...,10 } \). Then we create their tensor products \( \tilde{B}_{k,l;3}(x,y)=\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y),k,l=1,...,10 \).
Our testing space is \( V_m = gen \{ \tilde{B}_{k,l;3}(x,y)=\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y),k,l=1,...,10 \} \).
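A quick dimension check in Python (same assumptions as in the earlier sketch): the cubic knot vector has 14 knots, so it spans 14 - 3 - 1 = 10 basis functions per direction, and the repeated interior knots reduce the continuity there to \( C^1 \):

```python
import numpy as np

p_test = 3
knots_test = np.array([0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4], dtype=float)
n_test = len(knots_test) - p_test - 1
print(n_test)        # -> 10 functions per direction, 10 * 10 = 100 in 2D
```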
Having a discrete testing space, we finally obtain our discrete (i.e. finite dimensional) system of equations that we want to solve.
We are looking for \( ( r_m, u_h ) \in V_m \times U_h \) such that
\( (r_m,v_m)_{V_m} - \langle B u_h-l,v_m \rangle = 0 \quad \forall v_m \in V_m \\ \langle Bw_h,r_m\rangle = 0 \quad \forall w_h \in U_h \)
This is where the so-called inner product in the discrete testing space, \( (\cdot,\cdot)_{V_m} \), appears.
To determine which inner product to use in our problem, we need to look at our original equation written in the weak form \( b(u,v)=l(v) \\ b(u,v) = \int_{\Omega} \beta_x(x,y) \frac{\partial u(x,y)}{\partial x} v(x,y) dxdy+ \int_{\Omega } \beta_y(x,y) \frac{\partial u(x,y)}{\partial y }v(x,y) dxdy +\\ \int_{\Omega}\epsilon \frac{\partial u(x,y)}{\partial x } \frac{\partial v(x,y)}{\partial x } dxdy +\int_{\Omega} \epsilon \frac{\partial u(x,y)}{\partial y } \frac{\partial v(x,y)}{\partial y } dxdy \\ l(v) = \int_{\Omega} f(x,y) v(x,y) dxdy \)
and see in what form the test functions \( v \) appear in \( b(u,v) \). We see that we are dealing with the partial derivatives \( \frac{\partial v(x,y)}{\partial x } \), \( \frac{\partial v(x,y)}{\partial y } \) and with the function itself \( v(x,y) \). Our inner product should therefore contain both the derivatives and the values of the functions
\( (u,v)_{V_m} = \int_{\Omega} u(x,y) v(x,y) dxdy + \int_{\Omega} \frac{\partial u(x,y)}{\partial x } \frac{\partial v(x,y) }{\partial x } dxdy + \int_{\Omega} \frac{\partial u(x,y)}{\partial y } \frac{\partial v(x,y)}{\partial y } dxdy \)
This is the standard \( H^1 \) inner product.
In conclusion, in the residual minimization method we need to define a separate approximation space and a separate (larger) testing space, and choose the inner product of the testing space. We then get a system of equations in which the unknowns are our solution \( u \) and, additionally, the residual \( r \).
How well the residual minimization method works depends largely on our choice of the testing space and the inner product. If the inner product is chosen appropriately and the testing space is large enough, everything will work well, and we will get the best possible solution in the approximation space \( U_h \) (but not better than the approximation space allows). The mathematical justification is that the residual minimization problem with an infinite dimensional test space realizes the inf-sup stability condition with a constant equal to 1. If the testing space is narrowed down to a finite dimensional space, the inf-sup condition may no longer be perfectly realized in this space. Enlarging the testing space improves the realization of the inf-sup stability condition.
Our system of equations to be solved in the residual minimization method is as follows
\( \left( \begin{matrix} G & B \\ B^T & 0 \\ \end{matrix} \right) \left( \begin{matrix} r \\ u \end{matrix} \right)= \left( \begin{matrix} f \\ 0 \end{matrix} \right) \)
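The structure of this system can be verified on a small synthetic example. The sketch below uses random placeholder matrices (the sizes and entries are illustrative, not the actual advection-diffusion matrices): it builds a symmetric positive definite Gram matrix \( G \) and a rectangular \( B \), solves the saddle-point system, and checks that the resulting \( u \) also solves the normal equations \( B^T G^{-1} B u = B^T G^{-1} f \), i.e., the discrete residual minimization problem:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 12, 5                        # toy test-space and trial-space dimensions

A = rng.standard_normal((m, m))
G = A @ A.T + m * np.eye(m)         # random symmetric positive definite Gram matrix
B = rng.standard_normal((m, n))     # rectangular problem matrix (rows: test, cols: trial)
f = rng.standard_normal(m)          # right-hand side

# Saddle-point system: [G  B; B^T  0] [r; u] = [f; 0]
K = np.block([[G, B], [B.T, np.zeros((n, n))]])
ru = np.linalg.solve(K, np.concatenate([f, np.zeros(n)]))
r, u = ru[:m], ru[m:]

# Eliminating r from the first block row gives the normal equations for u
u_normal = np.linalg.solve(B.T @ np.linalg.solve(G, B), B.T @ np.linalg.solve(G, f))
print(np.allclose(u, u_normal))     # -> True
print(np.allclose(B.T @ r, 0))      # second block row: B^T r = 0 -> True
```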

Let's see what our system of equations looks like using the example of specific approximation and testing spaces.

We now define the functions used to approximate the solution and for testing.
We specify what our patch of elements will look like and our B-spline basis functions spanned over the elements, by giving a knot vector along the \( x \) axis and a knot vector along the \( y \) axis. Here we refer to the third chapter, which describes the way of defining basis functions using knot vectors and B-spline polynomials.
Along the \( x \) axis we introduce the knot vector [0 0 0 1 2 3 4 4 4], and similarly along the \( y \) axis we introduce the knot vector [0 0 0 1 2 3 4 4 4].
We obtain two bases of one-dimensional B-spline functions \( \{ B_{i,2}(x) \}_{i=1,...,6 } \) and \( \{B_{j,2}(y)\}_{j=1,...,6 } \). Then we create their tensor products \( B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y),i,j=1,...,6 \).
Our domain \( \Omega \) is again the square spanned by these knot vectors, with 6 * 6 = 36 basis functions, which follows from the definition of the knot vector [0 0 0 1 2 3 4 4 4].

Now note that the inner product matrix \( G \) is spanned by the testing space: both its rows and its columns correspond to test functions. Also note that our problem matrix \( B \) has rows corresponding to the testing space and columns corresponding to the approximation space.
Let us first write the matrix of our problem, \( B \).
\( B1_{i,j=1,...,6;k,l=1,...,10} = \int_{\Omega} \beta_x \frac{\partial B_{i,2}(x) }{\partial x } B_{j,2}(y) \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) dxdy \)
\( B2_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega} \beta_y B_{i,2}(x)\frac{\partial B_{j,2}(y)}{\partial y } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) dxdy \)
\( A1_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega } \epsilon \frac{\partial B_{i,2}(x)}{\partial x} B_{j,2}(y) \frac{\partial \tilde{B}_{k,3}(x)}{\partial x} \tilde{B}_{l,3}(y) dxdy \)
\( A2_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega} \epsilon B_{i,2}(x)\frac{\partial B_{j,2}(y)}{\partial y}\tilde{B}_{k,3 }(x)\frac{\partial \tilde{B}_{l,3 }(y)}{\partial y} dxdy \)
and the right side
\( F_{k,l=1,...,10}= \int_{\Omega} f(x,y) \tilde{B}_{k,3}(x) \tilde{B}_{l,3}(y)dxdy \)
Our matrix is the sum \( B = A1+A2+B1+B2 \).
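Because all four terms factor into products of one-dimensional integrals, the whole rectangular matrix \( B \) (and the right-hand side \( F \)) can be assembled from a few 1D matrices. A minimal Python sketch, assuming for illustration constant coefficients \( \beta_x=\beta_y=1 \), \( \epsilon=0.01 \) and \( f \equiv 1 \) (the text does not fix these values):

```python
import numpy as np
from scipy.interpolate import BSpline

kt_trial = np.array([0,0,0,1,2,3,4,4,4], float);           p_trial = 2  # quadratic trial basis
kt_test  = np.array([0,0,0,0,1,1,2,2,3,3,4,4,4,4], float); p_test  = 3  # cubic test basis
n_trial = len(kt_trial) - p_trial - 1                       # 6
n_test  = len(kt_test)  - p_test  - 1                       # 10

# Gauss-Legendre quadrature on the four elements [0,1], [1,2], [2,3], [3,4]
gx, gw = np.polynomial.legendre.leggauss(5)
xq = np.concatenate([(gx + 1) / 2 + a for a in range(4)])
wq = np.tile(gw / 2, 4)

def basis_vals(knots, p, n, deriv=0):
    rows = []
    for i in range(n):
        spl = BSpline(knots, np.eye(n)[i], p)
        rows.append((spl.derivative(deriv) if deriv else spl)(xq))
    return np.array(rows)                      # shape (n, number of quadrature points)

U, dU = basis_vals(kt_trial, p_trial, n_trial), basis_vals(kt_trial, p_trial, n_trial, 1)
V, dV = basis_vals(kt_test,  p_test,  n_test),  basis_vals(kt_test,  p_test,  n_test, 1)

def mat1d(test, trial):                        # 1D matrix: rows = test, cols = trial
    return np.einsum('kq,iq,q->ki', test, trial, wq)

M, Adv, S = mat1d(V, U), mat1d(V, dU), mat1d(dV, dU)   # mass, advection, stiffness (1D)

beta_x = beta_y = 1.0; eps = 0.01              # illustrative constant coefficients
B1 = beta_x * np.kron(Adv, M)                  # rows (k,l), cols (i,j), flattened row-major
B2 = beta_y * np.kron(M, Adv)
A1 = eps * np.kron(S, M)
A2 = eps * np.kron(M, S)
B_mat = A1 + A2 + B1 + B2                      # the rectangular problem matrix
F = np.kron(V @ wq, V @ wq)                    # right-hand side for f(x,y) = 1
print(B_mat.shape, F.shape)                    # -> (100, 36) (100,)
```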
Note that it is NOT a square matrix: it has 10 * 10 = 100 rows from the testing space and 6 * 6 = 36 columns from the approximation space. Let us now write the inner product matrix \( G \).
\( G1_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\tilde{B}_{i,3}(x)\tilde{B}_{j,3}(y) \tilde{B}_{k,3}(x) \tilde{B}_{l,3 }(y) dxdy \)
\( G2_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\frac{\partial \tilde{B}_{i,3}(x)}{\partial x} \tilde{B}_{j,3}(y) \frac{\partial \tilde{B}_{k,3}(x)}{\partial x} \tilde{B}_{l,3 }(y) dxdy \)
\( G3_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\tilde{B}_{i,3}(x)\frac{\partial \tilde{B}_{j,3}(y)}{\partial y} \tilde{B}_{k,3}(x) \frac{\partial \tilde{B}_{l,3 }(y)}{\partial y } dxdy \)
Our matrix looks like this
\( G=G1+G2+G3 \)

Note that now it IS a square matrix with the number of rows and columns corresponding to the size of the testing space.
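The Gram matrix also factors through the tensor-product structure: \( G1 \) is a mass matrix in both directions, \( G2 \) pairs a 1D stiffness matrix in \( x \) with a 1D mass matrix in \( y \), and \( G3 \) does the reverse. A standalone Python sketch of its assembly (same quadrature approach as before):

```python
import numpy as np
from scipy.interpolate import BSpline

kt = np.array([0,0,0,0,1,1,2,2,3,3,4,4,4,4], float)   # cubic test knot vector
p, n = 3, 10

gx, gw = np.polynomial.legendre.leggauss(5)
xq = np.concatenate([(gx + 1) / 2 + a for a in range(4)])  # quadrature points per element
wq = np.tile(gw / 2, 4)

V  = np.array([BSpline(kt, np.eye(n)[k], p)(xq) for k in range(n)])
dV = np.array([BSpline(kt, np.eye(n)[k], p).derivative()(xq) for k in range(n)])

Mass  = np.einsum('iq,kq,q->ik', V, V, wq)      # 1D mass matrix of the test basis
Stiff = np.einsum('iq,kq,q->ik', dV, dV, wq)    # 1D stiffness matrix of the test basis

# G = G1 + G2 + G3 = Mass x Mass + Stiff x Mass + Mass x Stiff (Kronecker products)
G = np.kron(Mass, Mass) + np.kron(Stiff, Mass) + np.kron(Mass, Stiff)
print(G.shape)                                   # -> (100, 100): square
print(np.allclose(G, G.T))                       # symmetric -> True
```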
